Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation
Automated metrics such as BLEU are widely used in the machine translation
literature. They have also been used recently in the dialogue community for
evaluating dialogue response generation. However, previous work in dialogue
response generation has shown that these metrics do not correlate strongly with
human judgment in the non-task-oriented dialogue setting. Task-oriented
dialogue responses are drawn from narrower domains and exhibit lower
diversity. It is thus reasonable to think that these automated metrics would
correlate well with human judgment in the task-oriented setting where the
generation task consists of translating dialogue acts into a sentence. We
conduct an empirical study to confirm whether this is the case. Our findings
indicate that these automated metrics have stronger correlation with human
judgments in the task-oriented setting than has been observed in
the non-task-oriented setting. We also observe that these metrics correlate
even better for datasets which provide multiple ground truth reference
sentences. In addition, we show that some of the currently available corpora
for task-oriented language generation can be solved with simple models and
advocate for more challenging datasets.
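As a toy illustration of the evaluation setup this abstract describes, the sketch below computes a clipped unigram precision (the core ingredient of BLEU; full BLEU also uses higher-order n-grams and a brevity penalty) and its Pearson correlation with human judgments. All system outputs, references, and human scores here are invented for illustration.

```python
from collections import Counter
from math import sqrt

def unigram_precision(candidate: str, references: list[str]) -> float:
    """Clipped unigram precision: each candidate token counts only up to
    the maximum number of times it appears in any single reference."""
    cand_counts = Counter(candidate.split())
    max_ref = Counter()
    for ref in references:
        for tok, c in Counter(ref.split()).items():
            max_ref[tok] = max(max_ref[tok], c)
    clipped = sum(min(c, max_ref[tok]) for tok, c in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between metric scores and human judgments."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented task-oriented responses, references, and human scores.
outputs = ["there are 3 italian restaurants nearby",
           "i found no matching flights",
           "the hotel has free wifi"]
refs = [["there are 3 italian restaurants in the area"],
        ["sorry , i found no matching flights"],
        ["the hotel offers free wifi"]]
human = [0.8, 0.9, 0.7]

scores = [unigram_precision(o, r) for o, r in zip(outputs, refs)]
print(pearson(scores, human))
```

Adding more reference sentences per example raises the clipped-precision ceiling, which is consistent with the abstract's observation that correlation improves on datasets with multiple ground-truth references.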
A Frame Tracking Model for Memory-Enhanced Dialogue Systems
Recently, resources and tasks were proposed to go beyond state tracking in
dialogue systems. An example is the frame tracking task, which requires
recording multiple frames, one for each user goal set during the dialogue. This
allows a user, for instance, to compare items corresponding to different goals.
This paper proposes a model which takes as input the list of frames created so
far during the dialogue, the current user utterance as well as the dialogue
acts, slot types, and slot values associated with this utterance. The model
then outputs the frame being referenced by each triple of dialogue act, slot
type, and slot value. We show that on the recently published Frames dataset,
this model significantly outperforms a previously proposed rule-based baseline.
In addition, we propose an extensive analysis of the frame tracking task by
dividing it into sub-tasks and assessing their difficulty with respect to our
model.
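The input/output interface described above can be sketched as follows. This is a toy rule-based assigner, not the paper's model or its published baseline: each (dialogue act, slot type, slot value) triple is assigned to the most recent frame consistent with it, and a new frame is opened when the value contradicts every existing frame. Slot names and values are invented.

```python
def assign_frames(frames: list[dict], triples: list[tuple]) -> list[tuple]:
    """Assign each (act, slot, value) triple to a frame index.

    frames:  list of dicts of slot constraints, one per user goal,
             e.g. [{"dst_city": "Rome", "budget": "2000"}]
    triples: (dialogue_act, slot_type, slot_value) from the utterance.
    Returns (act, slot, value, frame_index) for each triple.
    """
    assignments = []
    for act, slot, value in triples:
        target = None
        # Search newest-to-oldest for a frame compatible with this value.
        for idx in range(len(frames) - 1, -1, -1):
            existing = frames[idx].get(slot)
            if existing is None or existing == value:
                target = idx
                break
        if target is None:  # conflicts with every frame: a new user goal
            frames.append({})
            target = len(frames) - 1
        frames[target][slot] = value
        assignments.append((act, slot, value, target))
    return assignments

frames = [{"dst_city": "Rome"}]
triples = [("inform", "budget", "2000"),    # compatible with frame 0
           ("inform", "dst_city", "Oslo")]  # conflicts: opens frame 1
print(assign_frames(frames, triples))
```

The abstract's sub-task analysis makes sense in this framing: deciding *whether* a triple opens a new frame and deciding *which* existing frame it references are separable decisions with different difficulty.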
Frames: A Corpus for Adding Memory to Goal-Oriented Dialogue Systems
This paper presents the Frames dataset (Frames is available at
http://datasets.maluuba.com/Frames), a corpus of 1369 human-human dialogues
with an average of 15 turns per dialogue. We developed this dataset to study
the role of memory in goal-oriented dialogue systems. Based on Frames, we
introduce a task called frame tracking, which extends state tracking to a
setting where several states are tracked simultaneously. We propose a baseline
model for this task. We show that Frames can also be used to study memory in
dialogue management and information presentation through natural language
generation.
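The contrast with classical state tracking can be sketched as a data structure: instead of overwriting a single dialogue state, every past goal is kept as its own frame so the user can later refer back to or compare them. The slot names below are illustrative, not taken from the Frames annotation scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One user goal, with its accumulated slot constraints."""
    slots: dict = field(default_factory=dict)

class DialogueMemory:
    """Tracks several states simultaneously: classical state tracking
    keeps only the latest goal, whereas frame tracking keeps them all."""
    def __init__(self) -> None:
        self.frames: list[Frame] = []
        self.active: int = -1

    def new_goal(self, **slots) -> int:
        """Open a fresh frame; earlier frames are preserved."""
        self.frames.append(Frame(dict(slots)))
        self.active = len(self.frames) - 1
        return self.active

    def update_active(self, **slots) -> None:
        """Refine the current goal without touching past frames."""
        self.frames[self.active].slots.update(slots)

mem = DialogueMemory()
mem.new_goal(destination="Rome", budget=2000)
mem.new_goal(destination="Oslo", budget=1500)  # first goal is NOT erased
mem.update_active(duration="7 days")
print(len(mem.frames), mem.frames[0].slots["destination"])
```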
Influencing the Properties of Latent Spaces
Machine learning models rely on learned parameters adapted to a given set of data to perform a task, such as classifying images or generating sentences. These
learned parameters form latent spaces that can exhibit various properties, which affect how well the model performs.
Enabling a model to better fit properties of the data in its latent space can improve the performance of the model. One criterion for quality is the set of properties
expressed by the latent space, for example, topological properties of the learned representation.
We develop a model which leverages a variational autoencoder’s generative ability and augments it with the ladder network’s lateral connections for discrimination.
We propose a method to decouple two tasks performed by convolutional layers (that of learning useful filters for feature extraction, and that of arranging the
learned filters such that the next layer may train effectively) by using interspersed fully-connected layers.
Finally, we apply batch normalization to the recurrent state of the pixel-rnn layer and show that it significantly improves convergence speed and slightly
improves overall performance.
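The transform being applied to the recurrent state is standard batch normalization; a minimal single-feature sketch (gamma and beta are the usual learnable scale and shift, fixed here for illustration) is:

```python
from math import sqrt

def batch_norm(batch: list[float], gamma: float = 1.0,
               beta: float = 0.0, eps: float = 1e-5) -> list[float]:
    """Batch normalization over one feature:
        y_i = gamma * (x_i - mean) / sqrt(var + eps) + beta
    In the recurrent setting above, the same normalization is applied
    to the recurrent state at each time step."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / sqrt(var + eps) + beta for x in batch]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print([round(x, 3) for x in out])
```

With gamma = 1 and beta = 0 the output is zero-mean and approximately unit-variance, which is what keeps activation statistics stable and speeds up convergence.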
We show results applied in unsupervised and supervised settings, and augment models with various inter-layer interactions, such as encoder-to-decoder connections, affine post-layer transformations, and side-network connections. The effects of the proposed methods are assessed by measuring supervised performance or the quality of samples produced by the model, as well as by comparing training curves. Models and methods are tested on popular image datasets such as MNIST and CIFAR10 and are compared to the state of the art on the tasks they are applied to.